This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains.
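As a brief illustrative sketch of the automatic multiplicity correction mentioned above (an assumption-laden toy example, not the paper's full construction): if each of p candidate variables is included independently with a common prior probability w, and w is itself given a uniform Beta(1, 1) prior and integrated out, then a model containing k variables receives prior probability Beta(k + 1, p - k + 1) = 1 / ((p + 1) * C(p, k)). The prior weight on any fixed model therefore shrinks as p grows, penalizing inclusion more heavily when more candidate variables are tested.

```python
from math import exp, lgamma

def model_prior_prob(k, p):
    """Prior probability of a specific model with k of p variables,
    after integrating a Beta(1,1) prior on the common inclusion
    probability w: Beta(k+1, p-k+1) = k!(p-k)!/(p+1)!.
    Computed via log-gamma for numerical stability."""
    return exp(lgamma(k + 1) + lgamma(p - k + 1) - lgamma(p + 2))

# As p grows, the prior probability of any fixed one-variable model
# shrinks roughly like 1/p^2, an automatic multiplicity penalty:
for p in (10, 100, 1000):
    print(p, model_prior_prob(1, p))
```

Summing this prior over all C(p, k) models of each size k recovers total probability 1, which is a quick sanity check that the formula is a proper prior over model space.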